ChatGPT Accused of Exploitative Conditions for Kenyan Moderators
A newly released report has accused OpenAI – an artificial intelligence research firm – of paying Kenyan workers less than $2 per hour to help filter content for the ChatGPT chatbot.
The chatbot can generate text on almost any topic or theme, drawing on information from across the internet, which can be rife with toxicity and bias.
The report, released on Wednesday, 18 January, by an international publication, detailed how Kenyans employed by Sama, a North American outsourcing firm, were required to “label and filter out toxic data from ChatGPT’s training dataset.”
To do so, the employees were required to read graphic descriptions of harmful content while being paid between $1.32 and $2 an hour, depending on seniority and performance. According to the report, OpenAI agreed to pay Sama an hourly rate of $12.50, up to nine times what the moderators received.
One employee, who requested to remain unnamed to protect his livelihood, said that he “suffered from recurring visions after reading a graphic description.”
“That was torture,” he added.
Sama claims that “wellness counsellors” are provided for its employees, but all four inside sources said that the counselling sessions were “unhelpful and rare”.
Sama eventually cancelled its contract with OpenAI in February 2022, eight months earlier than originally agreed, citing the traumatic nature of the work.